
    Generalized linear mixing model accounting for endmember variability

    Endmember variability is an important factor when seeking to accurately recover information about the pure materials and their distribution in hyperspectral images. Recently, the extended linear mixing model (ELMM) was proposed as a modification of the linear mixing model (LMM) to account for endmember variability effects resulting mainly from illumination changes. In this paper, we further generalize the ELMM, leading to a new model (GLMM) that accounts for more complex spectral distortions in which different wavelength intervals can be affected unevenly. We also extend the existing methodology to jointly estimate the variability and the abundances under the GLMM. Simulations with real and synthetic data show that the unmixing process can benefit from the extra flexibility introduced by the GLMM.
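    The progression from the LMM to the ELMM and the GLMM can be sketched for a single pixel y_n as below. This is a hedged summary in common unmixing notation; the paper's exact symbols may differ.

```latex
% LMM: pixel y_n mixes endmember spectra (columns of M) with abundances a_n
\mathbf{y}_n = \mathbf{M}\mathbf{a}_n + \mathbf{e}_n
% ELMM: one scaling factor per endmember (diag(psi_n)), modeling illumination
\mathbf{y}_n = \mathbf{M}\,\mathrm{diag}(\boldsymbol{\psi}_n)\,\mathbf{a}_n + \mathbf{e}_n
% GLMM: a full matrix Psi_n of per-band, per-endmember scalings (Hadamard
% product with M), so different wavelength intervals can be distorted unevenly
\mathbf{y}_n = \left(\boldsymbol{\Psi}_n \odot \mathbf{M}\right)\mathbf{a}_n + \mathbf{e}_n
```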

    A new adaptive algorithm for video super-resolution with improved outlier handling capability

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2016.

    Abstract: Super-resolution reconstruction (SRR) is a technique that combines multiple low-resolution images of a single scene in order to create an image with higher resolution. The main characteristics considered when evaluating SRR algorithms are the quality of the resulting image, its robustness to outliers, and its computational cost. Higher quality in the reconstructed images implies a larger effective increase in their resolution; greater robustness implies that a good-quality result is obtained even when the processed images do not faithfully follow the adopted mathematical model; and the computational cost is especially relevant in SRR applications because the dimension of the problem is extremely large. One of the main applications of SRR is the reconstruction of video sequences. To make real-time processing feasible, which is a frequent requirement in video SRR applications, iterative algorithms have been proposed that process only one image at each time instant, reusing information from the estimates obtained at previous time instants. Among the iterative super-resolution algorithms in the literature, the R-LMS has an extremely low computational cost while providing reconstructions of competitive quality, making it suitable for real-time operation. However, like most existing SRR techniques, the R-LMS is highly susceptible to outliers, which can render the reconstructed images of lower quality than the low-resolution observations themselves. Although robust SRR techniques have been proposed to mitigate this problem, the computational cost of even the simplest of them is not comparable to that of the R-LMS, making real-time processing impractical. It is therefore desirable to devise new algorithms that offer a better compromise between quality, robustness, and computational cost.

    In this work, a new SRR technique based on the R-LMS algorithm is proposed. Based on the proximal-point cost-function representation of the gradient descent iteration, an intuitive interpretation of the behavior of the R-LMS algorithm is obtained, both under ideal conditions and in the presence of innovation outliers, which represent significant changes in the scene between adjacent frames of a video sequence. It is shown that the R-LMS's lack of robustness to innovation outliers is due mainly to its low convergence rate, and that there is a direct trade-off between convergence speed and the preservation of the information estimated at previous time instants: good quality on well-behaved sequences and good robustness to large innovations cannot be achieved simultaneously. The goal is thus to design an algorithm for real-time video reconstruction that is more robust to large outliers without compromising the preservation of the information estimated from the low-resolution sequence. Using a statistical model for the innovation outliers, a new regularization is proposed that penalizes variations between adjacent estimates of the video sequence in a subspace approximately orthogonal to the content of the innovations, thereby allowing faster convergence in the subspace corresponding to the innovations while preserving the previously estimated image details. Indeed, the image subspace in which the innovations contain the least energy is precisely the one containing the image details, so the proposed regularization increases robustness and, at the same time, preserves the details estimated at previous time instants. Two new algorithms are then derived. Computer simulations show that, although the proposed solution does not bring significant improvements under near-ideal conditions, it considerably outperforms the R-LMS when outliers are present in the image sequence, both quantitatively and visually, while its computational cost remains comparable to that of the R-LMS.
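    For intuition, the R-LMS recursion analyzed in the thesis can be viewed as a stochastic gradient step on a regularized least-squares cost. The sketch below is illustrative only, assuming a standard degradation model y_t ≈ DHx_t (blur H followed by decimation D) and a generic high-pass operator S for the smoothness term; the function name, signature, and parameter values are ours, not the thesis code.

```python
import numpy as np

def rlms_step(x_hat, y_t, DH, S, mu=1e-2, alpha=1e-3):
    """One simplified R-LMS-style iteration for the frame at time t.

    x_hat : current high-resolution estimate (flattened image)
    y_t   : low-resolution observation at time t
    DH    : combined decimation + blur operator (sparse in practice)
    S     : high-pass operator for the Tikhonov-like regularization
    """
    residual = y_t - DH @ x_hat                          # data-fidelity error
    grad = -(DH.T @ residual) + alpha * (S.T @ (S @ x_hat))
    return x_hat - mu * grad                             # gradient descent step

# Toy usage with stand-in operators (16x16 HR image, 8x8 LR observation):
rng = np.random.default_rng(0)
DH = rng.standard_normal((64, 256)) / 16.0
S = np.eye(256)
x, y = np.zeros(256), rng.standard_normal(64)
for _ in range(50):
    x = rlms_step(x, y, DH, S)
```

    The algorithms proposed in the thesis modify this recursion by adding a temporal regularization that penalizes, in a subspace approximately orthogonal to the innovations, the difference between consecutive estimates.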

    Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks

    Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences. It reveals the dynamical evolution of the materials (endmembers) and of their proportions (abundances) in a given scene. However, adequately accounting for the spatial and temporal variability of the endmembers in MTHU is challenging, and has not been fully addressed so far in unsupervised frameworks. In this work, we propose an unsupervised MTHU algorithm based on variational recurrent neural networks. First, a stochastic model is proposed to represent the dynamical evolution of the endmembers and of their abundances, as well as the mixing process. Moreover, a new model based on a low-dimensional parametrization is used to represent spatial and temporal endmember variability, significantly reducing the number of variables to be estimated. We formulate MTHU as a Bayesian inference problem. However, this problem does not admit an analytical solution due to the nonlinearity and non-Gaussianity of the model. Thus, we propose a solution based on deep variational inference, in which the posterior distribution of the estimated abundances and endmembers is represented using a combination of recurrent neural networks and a physically motivated model. The parameters of the model are learned using stochastic backpropagation. Experimental results show that the proposed method outperforms state-of-the-art MTHU algorithms.
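    The dynamical model described above can be summarized as a state-space formulation, sketched here in generic notation (the paper's exact parametrization, e.g. the low-dimensional variability model, is not reproduced):

```latex
% Hidden states: endmembers M_t and abundances a_t evolve stochastically
\mathbf{M}_t \sim p(\mathbf{M}_t \mid \mathbf{M}_{t-1}), \qquad
\mathbf{a}_t \sim p(\mathbf{a}_t \mid \mathbf{a}_{t-1})
% Observation: linear mixing of the time-t endmembers and abundances
\mathbf{y}_t = \mathbf{M}_t\,\mathbf{a}_t + \mathbf{e}_t, \qquad t = 1,\ldots,T
% Deep variational inference: an RNN-parametrized posterior q_phi is fitted by
% maximizing the evidence lower bound (ELBO) with stochastic backpropagation
\max_{\phi}\; \mathbb{E}_{q_{\phi}}\!\left[
  \log p(\mathbf{y}_{1:T}, \mathbf{M}_{1:T}, \mathbf{a}_{1:T})
  - \log q_{\phi}(\mathbf{M}_{1:T}, \mathbf{a}_{1:T} \mid \mathbf{y}_{1:T})
\right]
```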

    A Low-rank Tensor Regularization Strategy for Hyperspectral Unmixing

    Tensor-based methods have recently emerged as a more natural and effective formulation to address many problems in hyperspectral imaging. In hyperspectral unmixing (HU), low-rank constraints on the abundance maps have been shown to act as a regularization which adequately accounts for the multidimensional structure of the underlying signal. However, imposing a strict low-rank constraint on the abundance maps does not seem to be adequate, as important information that may be required to represent fine-scale abundance behavior may be discarded. This paper introduces a new low-rank tensor regularization that adequately captures the low-rank structure underlying the abundance maps without hindering the flexibility of the solution. Simulation results with synthetic and real data show that the extra flexibility introduced by the proposed regularization significantly improves the unmixing results.
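    One way to write such a non-strict low-rank penalty, in our notation rather than necessarily the paper's exact formulation, is to attract the abundance tensor toward a low-rank surrogate instead of constraining it outright:

```latex
% Y: H x W x L hyperspectral cube; A: H x W x R abundance tensor;
% M: L x R endmember matrix; x_3 is the mode-3 tensor-matrix product
\min_{\mathcal{A},\,\mathcal{B}}\;
  \big\|\mathcal{Y} - \mathcal{A} \times_3 \mathbf{M}\big\|_F^2
  + \lambda\,\big\|\mathcal{A} - \mathcal{B}\big\|_F^2
\quad \text{s.t.} \quad \operatorname{rank}(\mathcal{B}) \le K
% A strict constraint would force A = B; the quadratic coupling instead keeps
% flexibility for fine-scale abundance behavior
```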

    Deep Hyperspectral and Multispectral Image Fusion with Inter-image Variability

    Hyperspectral and multispectral image fusion allows us to overcome the hardware limitations of hyperspectral imaging systems inherent to their lower spatial resolution. Nevertheless, existing algorithms usually fail to consider realistic image acquisition conditions. This paper presents a general imaging model that considers inter-image variability of data from heterogeneous sources and flexible image priors. The fusion problem is stated as an optimization problem in the maximum a posteriori framework. We introduce an original image fusion method that, on the one hand, solves the optimization problem accounting for inter-image variability with an iteratively reweighted scheme and, on the other hand, leverages lightweight CNN-based networks to learn realistic image priors from data. In addition, we propose a zero-shot strategy to directly learn the image-specific prior of the latent images in an unsupervised manner. The performance of the algorithm is illustrated with real data subject to inter-image variability.
    Comment: IEEE Trans. Geosci. Remote Sens., to be published. Manuscript submitted August 23, 2022; revised Dec. 15, 2022, and Mar. 13, 2023; accepted Apr. 07, 2023.
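    A hedged sketch of the kind of imaging model the abstract describes, in generic notation (the paper's actual model and priors may differ):

```latex
% Z: latent high-resolution HS image (bands x pixels); B, S: spatial blurring
% and downsampling; R: multispectral spectral response; Delta Z: inter-image
% variability between the heterogeneous acquisitions
\mathbf{Y}_{h} = \mathbf{Z}\,\mathbf{B}\,\mathbf{S} + \mathbf{E}_{h}, \qquad
\mathbf{Y}_{m} = \mathbf{R}\,(\mathbf{Z} + \Delta\mathbf{Z}) + \mathbf{E}_{m}
% MAP estimation: data-fidelity terms plus priors phi_1, phi_2 on the latents
% (learned by light-weight CNNs in the paper)
\min_{\mathbf{Z},\,\Delta\mathbf{Z}}\;
  \|\mathbf{Y}_{h} - \mathbf{Z}\mathbf{B}\mathbf{S}\|_F^2
  + \|\mathbf{Y}_{m} - \mathbf{R}(\mathbf{Z} + \Delta\mathbf{Z})\|_F^2
  + \lambda_1\,\phi_1(\mathbf{Z}) + \lambda_2\,\phi_2(\Delta\mathbf{Z})
```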

    Super-Resolution for Hyperspectral and Multispectral Image Fusion Accounting for Seasonal Spectral Variability

    Image fusion combines data from different heterogeneous sources to obtain more precise information about an underlying scene. Hyperspectral-multispectral (HS-MS) image fusion is currently attracting great interest in remote sensing since it allows the generation of high spatial resolution HS images, circumventing the main limitation of this imaging modality. Existing HS-MS fusion algorithms, however, neglect the spectral variability that often exists between images acquired at different time instants. This time difference causes variations in the spectral signatures of the underlying constituent materials due to different acquisition and seasonal conditions. This paper introduces a novel HS-MS image fusion strategy that combines an unmixing-based formulation with an explicit parametric model for the typical spectral variability between the two images. Simulations with synthetic and real data show that the proposed strategy leads to a significant performance improvement under spectral variability and state-of-the-art performance otherwise.
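    The unmixing-based coupling described above might be sketched as follows. This is our speculative notation, with psi a vector of per-band scaling factors capturing seasonal variability in ELMM style, not the paper's exact model:

```latex
% Both images are explained by shared abundances A over endmembers M, but the
% MS acquisition sees seasonally altered endmembers (per-band scalings psi)
\mathbf{Y}_h = (\mathbf{M}\,\mathbf{A})\,\mathbf{B}\,\mathbf{S} + \mathbf{E}_h, \qquad
\mathbf{Y}_m = \mathbf{R}\,\big((\boldsymbol{\psi}\,\mathbf{1}^{\top} \odot \mathbf{M})\,\mathbf{A}\big) + \mathbf{E}_m
% B, S: spatial blur and downsampling; R: multispectral spectral response
```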

    Deep Generative Models for Library Augmentation in Multiple Endmember Spectral Mixture Analysis

    Multiple Endmember Spectral Mixture Analysis (MESMA) is one of the leading approaches to perform spectral unmixing (SU) while considering variability of the endmembers (EMs). It represents each EM in the image using libraries of spectral signatures acquired a priori. However, existing spectral libraries are often small and unable to properly capture the variability of each EM in practical scenes, which compromises the performance of MESMA. In this paper, we propose a library augmentation strategy to increase the diversity of existing spectral libraries, thus improving their ability to represent the materials in real images. First, we leverage the power of deep generative models to learn the statistical distribution of the EMs based on the spectral signatures available in the existing libraries. Afterwards, new samples can be drawn from the learned EM distributions and used to augment the spectral libraries, improving the overall quality of the SU process. Experimental results using synthetic and real data attest to the superior performance of the proposed method, even under library mismatch conditions.
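    As an illustration of the augmentation idea, one could fit a small variational autoencoder (VAE) to the library spectra of each EM class and draw new signatures from its prior. This is a hypothetical sketch; the paper's generative model and architecture are not reproduced here:

```python
import torch
import torch.nn as nn

class SpectraVAE(nn.Module):
    """Hypothetical VAE over the library spectra of one endmember class."""
    def __init__(self, n_bands, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bands, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_bands), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam. trick
        return self.dec(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        z = torch.randn(n, self.mu.out_features)  # draw from the N(0, I) prior
        return self.dec(z)                        # synthetic reflectances in [0, 1]
```

    After training with the usual ELBO loss (reconstruction error plus a KL term, omitted here), something like `SpectraVAE(n_bands=224).sample(100)` would yield 100 synthetic signatures to append to that EM's library; the band count is illustrative.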